
    Explainable Artificial Intelligence (XAI) for Intrusion Detection and Mitigation in Intelligent Connected Vehicles: A Review

    The potential for an intelligent transportation system (ITS) has been made possible by the growth of the Internet of Things (IoT) and artificial intelligence (AI), resulting in the integration of IoT and ITS, known as the Internet of Vehicles (IoV). To achieve the goals of automated driving and efficient mobility, IoV is now combined with modern communication technologies (such as 5G) to realize intelligent connected vehicles (ICVs). However, IoV faces security risks in five domains: ICV security, intelligent device security, service platform security, V2X communication security, and data security. Numerous AI models have been developed to mitigate the impact of intrusion threats on ICVs. At the same time, the rise of explainable AI (XAI) stems from the need to build confidence, transparency, and repeatability into the development of AI for ICV security and to provide a safe ITS. Accordingly, the scope of this review covers the XAI models used in ICV intrusion detection systems (IDSs), their taxonomies, and outstanding research problems. The results of the study show that XAI, though still in the early stages of application to ICVs, is a promising research direction in the quest to improve the network efficiency of ICVs. The paper further reveals that the increased transparency of XAI will foster its acceptability in the automobile industry.
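    The perturbation-based attribution idea behind many of the XAI methods such reviews survey (e.g., SHAP- or LIME-style explanations of an IDS decision) can be sketched in a few lines. The example below is illustrative only and is not taken from the paper: the rule-based anomaly scorer, the feature names, and the thresholds are all hypothetical stand-ins for a trained detection model.

    ```python
    # Illustrative sketch of perturbation-based feature attribution for an
    # intrusion detection system (IDS). A hand-written rule "model" stands in
    # for a trained classifier; all features and thresholds are hypothetical.

    def ids_score(sample):
        """Toy anomaly score for a V2X message: higher means more suspicious."""
        score = 0.0
        if sample["msg_rate_hz"] > 100:      # flooding-style behaviour
            score += 0.5
        if sample["payload_entropy"] > 0.9:  # obfuscated/encrypted-looking payload
            score += 0.25
        if not sample["valid_signature"]:    # failed message authentication
            score += 0.25
        return score

    def explain(sample, baseline):
        """Leave-one-feature-out attribution: how much does replacing each
        feature with a benign baseline value reduce the anomaly score?"""
        full = ids_score(sample)
        attributions = {}
        for feature in sample:
            perturbed = dict(sample, **{feature: baseline[feature]})
            attributions[feature] = full - ids_score(perturbed)
        return attributions

    suspicious = {"msg_rate_hz": 250, "payload_entropy": 0.95, "valid_signature": False}
    benign = {"msg_rate_hz": 10, "payload_entropy": 0.4, "valid_signature": True}

    print(ids_score(suspicious))        # 1.0: all three rules fire
    print(explain(suspicious, benign))  # per-feature contribution to the score
    ```

    Methods like SHAP refine this idea by averaging over many feature coalitions rather than perturbing one feature at a time, but the transparency benefit the abstract highlights is the same: the detector reports not only *that* a message is flagged, but *which* features drove the decision.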